In a network of neurons, elements update individually using only local information, allowing for entirely decentralized learning. In contrast, elements in an artificial neural network (ANN) are typically updated simultaneously by a central processor. Here we investigate the feasibility and effect of asynchronous learning in recently introduced decentralized, physics-driven learning networks. We show that desynchronizing the learning process does not degrade performance on a variety of tasks in idealized simulation. In experiment, desynchronization actually improves performance by allowing the system to better explore the discretized state space of solutions. We draw an analogy between desynchronization and mini-batching in stochastic gradient descent and show that they have similar effects on the learning process. Desynchronizing the learning process establishes physics-driven learning networks as truly fully distributed learning machines, promoting better performance and scalability in deployment.
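The analogy between desynchronized updates and mini-batching can be illustrated with a toy gradient-descent experiment. This is a minimal sketch under simplifying assumptions, not the paper's physics-driven coupled-learning setup: each step updates only a random subset of parameters from local gradient information, much as a mini-batch touches only a random subset of samples.

```python
# Toy illustration (not the paper's physics-driven network): compare synchronous
# full-parameter gradient descent with "desynchronized" updates, where each step
# only a random subset of parameters is updated using its local gradient.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))          # fixed random regression problem
x_true = rng.normal(size=20)
b = A @ x_true

def loss(x):
    return 0.5 * np.mean((A @ x - b) ** 2)

def grad(x):
    return A.T @ (A @ x - b) / len(b)

def run(update_fraction, steps=2000, lr=0.05):
    """update_fraction = 1.0 -> synchronous; < 1.0 -> only that fraction of
    parameters is updated per step (analogous to a smaller mini-batch)."""
    x = np.zeros(20)
    for _ in range(steps):
        g = grad(x)
        if update_fraction < 1.0:
            mask = rng.random(20) < update_fraction   # random subset of "elements"
            g = g * mask
        x -= lr * g
    return loss(x)

print("synchronous:    ", run(1.0))
print("desynchronized: ", run(0.25))
```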
Machine learning (ML) is often viewed as a black-box regression technique that offers little scientific insight. ML models, however, are universal function approximators and, if used correctly, can provide scientific information about the ground-truth dataset used to fit them. A benefit of ML over parametric models is that no predefined basis functions limit the phenomena that can be modeled. In this work, we develop ML models on three datasets: the Space Environment Technologies (SET) High Accuracy Satellite Drag Model (HASDM) density database, a spatiotemporally matched dataset from the Jacchia-Bowman 2008 empirical thermospheric density model (JB2008), and an accelerometer-derived density dataset from the Challenging Minisatellite Payload (CHAMP). These ML models are compared with the Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Radar (NRLMSIS 2.0) model to study the presence of post-storm cooling in the middle thermosphere. We find that neither NRLMSIS 2.0 nor JB2008-ML accounts for post-storm cooling, and they therefore perform poorly in the periods following strong geomagnetic storms (e.g., the 2003 Halloween storms). Conversely, HASDM-ML and CHAMP-ML do show evidence of post-storm cooling, indicating that this phenomenon is present in the original datasets. The results show that density reductions can occur for 1-3 days after a storm, depending on location and storm intensity.
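As a purely hypothetical sketch of the general workflow, the following fits a small neural-network density surrogate on synthetic drivers. The feature set, data, and architecture are placeholders and do not reflect the actual HASDM-ML, JB2008-ML, or CHAMP-ML models.

```python
# Hypothetical sketch of fitting an ML thermospheric-density surrogate; the real
# models, inputs, and data pipelines in the study differ from this toy setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
# Placeholder drivers (illustrative only): F10.7 solar flux, Dst index,
# altitude, local solar time.
X = np.column_stack([
    rng.uniform(70, 250, n),      # F10.7
    rng.uniform(-400, 20, n),     # Dst
    rng.uniform(300, 500, n),     # altitude [km]
    rng.uniform(0, 24, n),        # local solar time [h]
])
# Synthetic log10-density with a crude storm-time response (demonstration only).
y = -12.0 + 0.004 * X[:, 0] - 0.001 * X[:, 1] - 0.005 * (X[:, 2] - 400)
y += rng.normal(scale=0.05, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", model.score(X_te, y_te))
```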
Understanding the abundance and distribution of fish in tidal energy streams is important for assessing the risks that introducing tidal energy devices poses to the habitat. However, tidal currents suitable for tidal energy are often highly turbulent, which complicates the interpretation of echosounder data. The portion of the water column contaminated by returns from entrained air must be excluded from the data used for biological analyses. Applying a single conventional algorithm to identify the depth of entrained air is insufficient for a boundary that is discontinuous, depth-dynamic, and porous, and that varies with tidal flow speed. Using a case study at a tidal energy demonstration site in the Bay of Fundy, we describe the development and application of deep machine learning models with a U-Net based architecture. Our model, Echofilter, was highly responsive to the dynamic range of turbulence conditions and sensitive to fine-scale nuances in the boundary position, producing an entrained-air boundary line with a mean error of 0.33 m on mobile downfacing recordings and 0.5-1.0 m on stationary upfacing data, less than half that of existing algorithmic solutions. The model's overall annotations showed high agreement with human segmentation, with an intersection-over-union score of 99% for mobile downfacing recordings and 92-95% for stationary upfacing recordings. The time required to manually edit the line placement was reduced by 50% compared with the time needed to edit the placements produced by currently available algorithms. Because of the improved initial automated placement, implementing the model allows greater standardization and repeatability of line placement.
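The intersection-over-union (Jaccard) agreement quoted above can be computed directly on binary water-column masks. The sketch below uses hypothetical masks and boundary depths purely to show the metric, not the Echofilter pipeline itself.

```python
# Sketch of the intersection-over-union (Jaccard) agreement metric, computed on
# hypothetical binary masks marking the usable (below the entrained-air boundary)
# part of the water column.
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """IoU between two boolean masks of identical shape (depth x ping)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union else 1.0

# Toy example: human and model boundaries differ slightly on a few pings.
depth_bins, pings = 200, 50
human_boundary = np.full(pings, 40)                  # boundary depth index per ping
model_boundary = human_boundary + np.random.default_rng(0).integers(-3, 4, pings)

depth_idx = np.arange(depth_bins)[:, None]
human_mask = depth_idx > human_boundary[None, :]     # True below the boundary
model_mask = depth_idx > model_boundary[None, :]

print(f"IoU = {jaccard(human_mask, model_mask):.3f}")
```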
We consider the general problem of online convex optimization in the presence of predictions of the next cost and constraint functions. A novel primal-dual algorithm is designed by combining a Follow-The-Regularized-Leader iteration with prediction-adaptive dynamic steps. The algorithm achieves $\mathcal{O}(T^{\frac{3-\beta}{4}})$ regret and $\mathcal{O}(T^{\frac{1+\beta}{2}})$ constraint-violation bounds that are tunable via the parameter $\beta \in [1/2, 1)$ and shrink with the prediction quality through a constant factor, eventually achieving $\mathcal{O}(1)$ regret for perfect predictions. Our work extends the FTRL framework to this constrained OCO setting and outperforms the state-of-the-art greedy-based solutions, without imposing conditions on the prediction quality, the cost functions, or the geometry of the constraints beyond convexity.
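A minimal sketch of the Follow-The-Regularized-Leader idea on online linear losses is given below, assuming an L2 regularizer and a Euclidean-ball feasible set; the paper's algorithm additionally has a primal-dual structure and prediction-adaptive step sizes that are not reproduced here.

```python
# Minimal FTRL sketch: x_{t+1} = argmin_x  sum_{s<=t} g_s . x  +  ||x||^2 / (2*eta)
# over a Euclidean ball, which reduces to projecting -eta * (sum of gradients).
import numpy as np

def project_ball(x, radius=1.0):
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(0)
d, T, eta = 5, 1000, 0.05          # in theory eta would scale with the horizon
grad_sum = np.zeros(d)
x = np.zeros(d)
total_loss = 0.0

for t in range(T):
    g = rng.normal(size=d) + 0.5   # linear loss gradient revealed at round t
    total_loss += g @ x
    grad_sum += g
    x = project_ball(-eta * grad_sum)   # closed-form FTRL step, then projection

print("average loss of FTRL iterates:", total_loss / T)
```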
Protein-protein interactions (PPIs) are critical to normal cellular function and are related to many disease pathways. However, only 4% of PPIs are annotated with post-translational modifications (PTMs) in biological knowledge databases such as IntAct, mostly through manual curation, which is neither time- nor cost-effective. We use the IntAct PPI database to create a distantly supervised dataset of interacting protein pairs, their corresponding PTM type, and associated abstracts from the PubMed database. We train an ensemble of BioBERT models, dubbed PPI-BioBERT-x10, to improve confidence calibration. We extend the use of ensemble-average confidence with confidence variation to counteract the effects of class imbalance when extracting high-confidence predictions. Evaluated on the test set, the PPI-BioBERT-x10 model yields a modest F1-micro of 41.3 (P = 58.1, R = 32.1). However, by combining high confidence and low variation to identify high-quality predictions, tuning the predictions for precision, we retained 19% of the test predictions at 100% precision. We evaluated PPI-BioBERT-x10 on 18 million PubMed abstracts, extracted 1.6 million PTM-PPI predictions (546,507 unique PTM-PPI triplets), and filtered ~5,700 (4,584 unique) high-confidence predictions. Of these 5,700, human evaluation on a small randomly sampled subset showed that, despite confidence calibration, precision drops to 33.7%, highlighting the challenge of generalizing beyond the test set even with confidence calibration. We circumvent this problem by including only predictions associated with multiple papers, raising the precision to 58.8%. In this work, we highlight the benefits and challenges of deep-learning-based text mining in practice, and the need for increased emphasis on confidence calibration to facilitate human curation efforts.
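The "high confidence, low variation" filtering can be sketched with a few lines of numpy. The probabilities and thresholds below are synthetic and hypothetical; in the study, each of the ten PPI-BioBERT ensemble members would supply one softmax score per predicted relation.

```python
# Sketch of gating relation-extraction predictions by ensemble statistics, as in the
# "high confidence, low variation" filtering described above (synthetic numbers).
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_members = 8, 10
# member_probs[i, j]: probability assigned by ensemble member j to the predicted
# PTM-PPI relation for example i.
member_probs = rng.beta(20, 2, size=(n_examples, n_members))

mean_conf = member_probs.mean(axis=1)      # ensemble-average confidence
conf_std = member_probs.std(axis=1)        # variation across members

# Keep only predictions that are confident on average and consistent across members
# (the 0.90 / 0.05 thresholds are illustrative, not the paper's values).
keep = (mean_conf >= 0.90) & (conf_std <= 0.05)
for i in range(n_examples):
    flag = "KEEP" if keep[i] else "drop"
    print(f"example {i}: mean={mean_conf[i]:.2f} std={conf_std[i]:.2f} -> {flag}")
```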
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
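A relevance query of this kind can be issued through the legacy OpenAI completions endpoint, as sketched below. The prompt wording, output parsing, and example inputs are hypothetical rather than the authors' exact setup, and text-davinci-003 has since been deprecated.

```python
# Sketch of the bill-relevance classification step using the legacy OpenAI
# completions API (openai-python < 1.0). Prompt and parsing are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def bill_relevance(bill_summary: str, company_description: str) -> str:
    prompt = (
        "You are a corporate lobbyist.\n"
        f"Company: {company_description}\n"
        f"Bill summary: {bill_summary}\n"
        "Is this bill relevant to the company? Answer YES or NO, then give a short "
        "explanation and a confidence level from 0 to 100.\n"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0.0,
    )
    return response["choices"][0]["text"].strip()

print(bill_relevance(
    "A bill to require disclosure of algorithmic pricing in online retail.",
    "A large e-commerce company selling consumer goods online.",
))
```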
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
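Monte-Carlo Dropout uncertainty with rejection of the most uncertain tiles can be sketched as follows. The tiny model, random tile features, and 10% rejection rate are placeholders, not the whole-slide-image classifiers or thresholds evaluated in the study.

```python
# Sketch of Monte-Carlo Dropout uncertainty with rejection of the most uncertain
# tiles (PyTorch); model, features, and rejection rate are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 2))

def mc_dropout_predict(x, n_samples=20):
    model.train()                      # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)     # predictive distribution per tile
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

tiles = torch.randn(100, 64)           # stand-in for tile feature vectors
mean_probs, entropy = mc_dropout_predict(tiles)

# Reject the 10% most uncertain tiles before computing downstream accuracy.
keep = entropy <= entropy.quantile(0.9)
predictions = mean_probs.argmax(dim=-1)
print(f"kept {int(keep.sum())} of {len(tiles)} tiles;",
      "class counts on kept tiles:", torch.bincount(predictions[keep]).tolist())
```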
In large-scale machine learning, recent works have studied the effects of compressing gradients in stochastic optimization in order to alleviate the communication bottleneck. These works have collectively revealed that stochastic gradient descent (SGD) is robust to structured perturbations such as quantization, sparsification, and delays. Perhaps surprisingly, despite the surge of interest in large-scale, multi-agent reinforcement learning, almost nothing is known about the analogous question: Are common reinforcement learning (RL) algorithms also robust to similar perturbations? In this paper, we investigate this question by studying a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction, where a general compression operator is used to model the perturbation. Our main technical contribution is to show that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic theoretical guarantees as their SGD counterparts. We then extend our results significantly to nonlinear stochastic approximation algorithms and multi-agent settings. In particular, we prove that for multi-agent TD learning, one can achieve linear convergence speedups in the number of agents while communicating just $\tilde{O}(1)$ bits per agent at each time step. Our work is the first to provide finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling. Our analysis hinges on studying the drift of a novel Lyapunov function that captures the dynamics of a memory variable introduced by error feedback.
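The interplay of compression and error feedback in a TD update can be sketched in a few lines. The random-walk reward process, features, and step size below are illustrative assumptions, not the setting analyzed in the paper, and the compression operator is a simple top-k.

```python
# Sketch of TD(0) with linear function approximation where the update direction is
# compressed (top-k) and an error-feedback memory accumulates what compression drops.
import numpy as np

rng = np.random.default_rng(0)
n_states, d, gamma, alpha, k = 10, 6, 0.9, 0.05, 2
features = rng.normal(size=(n_states, d)) / np.sqrt(d)
theta = np.zeros(d)
memory = np.zeros(d)                     # error-feedback accumulator

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]     # keep the k largest-magnitude entries
    out[idx] = v[idx]
    return out

state = 0
for t in range(20000):
    next_state = (state + rng.choice([-1, 1])) % n_states   # random-walk transitions
    reward = 1.0 if next_state == 0 else 0.0
    td_error = reward + gamma * features[next_state] @ theta - features[state] @ theta
    g = td_error * features[state]       # TD(0) semi-gradient direction
    compressed = top_k(g + memory, k)    # compress direction plus carried-over error
    memory += g - compressed             # error feedback: remember what was dropped
    theta += alpha * compressed
    state = next_state

print("learned value estimates:", np.round(features @ theta, 2))
```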
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack, with higher level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervision to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Research on automated essay scoring has become increasingly important because it serves as a method for evaluating students' written responses at scale. Scalable methods for scoring written responses are needed as students migrate to online learning environments, resulting in the need to evaluate large numbers of written-response assessments. The purpose of this study is to describe and evaluate three active learning methods that can be used to minimize the number of essays that must be scored by human raters while still providing the data needed to train a modern automated essay scoring system. The three active learning methods are the uncertainty-based, the topological-based, and the hybrid method. These three methods were used to select essays included as part of the Automated Student Assessment Prize competition, which were then classified using a scoring model trained with the Bidirectional Encoder Representations from Transformers (BERT) language model. All three active learning methods produced strong results, with the topological-based method producing the most efficient classification. Growth rate accuracy was also evaluated. The active learning methods produced different levels of efficiency under different sample size allocations but, overall, all three methods were highly efficient and produced classifications that were similar to one another.
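The first of the three methods, uncertainty-based selection, can be sketched as below: score the unlabeled pool with the current model and route the least-confident essays to human raters. A logistic-regression classifier on random features stands in for the BERT-based scoring model, and the batch size and score range are hypothetical.

```python
# Sketch of the uncertainty-based active-learning step: select the least-confident
# essays in the unlabeled pool for human scoring (synthetic features and labels).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(50, 20))
y_labeled = rng.integers(0, 4, 50)          # essay scores 0-3 (placeholder scale)
X_pool = rng.normal(size=(500, 20))         # unscored essays

model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
probs = model.predict_proba(X_pool)

# Least-confidence criterion: smallest maximum class probability = most uncertain.
uncertainty = 1.0 - probs.max(axis=1)
query_indices = np.argsort(uncertainty)[-25:]    # batch of 25 essays for human rating
print("essays selected for human rating:", sorted(query_indices.tolist()))
```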